DonorsChoose.org receives hundreds of thousands of project proposals each year for classroom projects in need of funding. Right now, a large number of volunteers are needed to manually screen each submission before it is approved to be posted on the DonorsChoose.org website.
Next year, DonorsChoose.org expects to receive close to 500,000 project proposals. As a result, there are several problems they need to solve: scaling the manual screening process, keeping vetting consistent across volunteers, and focusing volunteer time on the applications that need it most.
The goal of the competition is to predict whether or not a DonorsChoose.org project proposal submitted by a teacher will be approved, using the text of project descriptions as well as additional metadata about the project, teacher, and school. DonorsChoose.org can then use this information to identify projects most likely to need further review before approval.
The train.csv data set provided by DonorsChoose contains the following features:
| Feature | Description |
|---|---|
| `project_id` | A unique identifier for the proposed project. Example: `p036502` |
| `project_title` | Title of the project. |
| `project_grade_category` | Grade level of students for which the project is targeted (one of an enumerated set of values). |
| `project_subject_categories` | One or more (comma-separated) subject categories for the project, drawn from an enumerated list of values. |
| `school_state` | State where the school is located (two-letter U.S. postal code). Example: `WY` |
| `project_subject_subcategories` | One or more (comma-separated) subject subcategories for the project. |
| `project_resource_summary` | An explanation of the resources needed for the project. |
| `project_essay_1` | First application essay* |
| `project_essay_2` | Second application essay* |
| `project_essay_3` | Third application essay* |
| `project_essay_4` | Fourth application essay* |
| `project_submitted_datetime` | Datetime when the project application was submitted. Example: `2016-04-28 12:43:56.245` |
| `teacher_id` | A unique identifier for the teacher of the proposed project. Example: `bdf8baa8fedef6bfeec7ae4ff1c15c56` |
| `teacher_prefix` | Teacher's title (one of an enumerated set of values). |
| `teacher_number_of_previously_posted_projects` | Number of project applications previously submitted by the same teacher. Example: `2` |
* See the section Notes on the Essay Data for more details about these features.
Additionally, the resources.csv data set provides more data about the resources required for each project. Each line in this file represents a resource required by a project:
| Feature | Description |
|---|---|
| `id` | A `project_id` value from the train.csv file. Example: `p036502` |
| `description` | Description of the resource. Example: `Tenor Saxophone Reeds, Box of 25` |
| `quantity` | Quantity of the resource required. Example: `3` |
| `price` | Price of the resource required. Example: `9.95` |
Note: Many projects require multiple resources. The `id` value corresponds to a `project_id` in train.csv, so you can use it as a key to retrieve all the resources a project needs, as in the sketch below:
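A minimal sketch of that lookup (assuming resources.csv is in the working directory, and reusing the example id p036502 from the tables above):

import pandas as pd
resources = pd.read_csv('resources.csv')
# all resource rows belonging to a single project
one_project = resources[resources['id'] == 'p036502']
print(one_project[['description', 'quantity', 'price']])
# or aggregate price/quantity per project, as done later in this notebook
per_project = resources.groupby('id').agg({'price': 'sum', 'quantity': 'sum'}).reset_index()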
The data set contains the following label (the value you will attempt to predict):

| Label | Description |
|---|---|
| `project_is_approved` | A binary flag indicating whether DonorsChoose approved the project: 0 means the project was not approved, 1 means it was approved. |
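The label is typically imbalanced (most submissions are approved), so it is worth checking the class balance up front; a minimal sketch, assuming the training file is available under the name loaded later in this notebook:

import pandas as pd
train = pd.read_csv('train_data.csv')
# fraction of each class: 1 = approved, 0 = not approved
print(train['project_is_approved'].value_counts(normalize=True))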
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
import sqlite3
import pandas as pd
import numpy as np
import nltk
import string
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import confusion_matrix
from sklearn import metrics
from sklearn.metrics import roc_curve, auc
from nltk.stem.porter import PorterStemmer
import re
# Tutorial about Python regular expressions: https://pymotw.com/2/re/
from nltk.corpus import stopwords
from nltk.stem.wordnet import WordNetLemmatizer
from gensim.models import Word2Vec
from gensim.models import KeyedVectors
import pickle
from tqdm import tqdm
import os
from plotly import plotly
import plotly.offline as offline
import plotly.graph_objs as go
offline.init_notebook_mode()
from collections import Counter
project_data = pd.read_csv('train_data.csv')
resource_data = pd.read_csv('resources.csv')
print("Number of data points in train data", project_data.shape)
print('-'*50)
print("The attributes of data :", project_data.columns.values)
# how to replace elements in list python: https://stackoverflow.com/a/2582163/4084039
cols = ['Date' if x=='project_submitted_datetime' else x for x in list(project_data.columns)]
#sort dataframe based on time pandas python: https://stackoverflow.com/a/49702492/4084039
project_data['Date'] = pd.to_datetime(project_data['project_submitted_datetime'])
project_data.drop('project_submitted_datetime', axis=1, inplace=True)
project_data.sort_values(by=['Date'], inplace=True)
# how to reorder columns pandas python: https://stackoverflow.com/a/13148611/4084039
project_data = project_data[cols]
project_data.head(2)
print("Number of data points in train data", resource_data.shape)
print(resource_data.columns.values)
resource_data.head(2)
### project_subject_categories

categories = list(project_data['project_subject_categories'].values)
# remove special characters from list of strings python: https://stackoverflow.com/a/47301924/4084039
# https://www.geeksforgeeks.org/removing-stop-words-nltk-python/
# https://stackoverflow.com/questions/23669024/how-to-strip-a-specific-word-from-a-string
# https://stackoverflow.com/questions/8270092/remove-all-whitespace-in-a-string-in-python
cat_list = []
for i in categories:
    temp = ""
    # consider text like "Math & Science, Warmth, Care & Hunger"
    for j in i.split(','):  # split it into parts: ["Math & Science", " Warmth", " Care & Hunger"]
        if 'The' in j.split():  # j.split() breaks the category on spaces: "Math & Science" => ["Math", "&", "Science"]
            j = j.replace('The', '')  # drop the word "The" if present
        j = j.replace(' ', '')  # remove all spaces, e.g. "Math & Science" => "Math&Science"
        temp += j.strip() + " "  # strip() removes leading/trailing whitespace
    temp = temp.replace('&', '_')  # replace '&' with '_', e.g. "Math&Science" => "Math_Science"
    cat_list.append(temp.strip())
project_data['clean_categories'] = cat_list
project_data.drop(['project_subject_categories'], axis=1, inplace=True)
from collections import Counter
my_counter = Counter()
for word in project_data['clean_categories'].values:
my_counter.update(word.split())
cat_dict = dict(my_counter)
sorted_cat_dict = dict(sorted(cat_dict.items(), key=lambda kv: kv[1]))
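Since sorted_cat_dict is sorted by ascending count, a quick peek shows the rarest and the most common categories:

# least and most frequent categories (ascending order of count)
items = list(sorted_cat_dict.items())
print(items[:3])   # rarest categories
print(items[-3:])  # most frequent categories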
### project_subject_subcategories

sub_categories = list(project_data['project_subject_subcategories'].values)
# remove special characters from list of strings python: https://stackoverflow.com/a/47301924/4084039
# https://www.geeksforgeeks.org/removing-stop-words-nltk-python/
# https://stackoverflow.com/questions/23669024/how-to-strip-a-specific-word-from-a-string
# https://stackoverflow.com/questions/8270092/remove-all-whitespace-in-a-string-in-python
sub_cat_list = []
for i in sub_categories:
    temp = ""
    # same cleaning steps as for the categories above
    for j in i.split(','):
        if 'The' in j.split():
            j = j.replace('The', '')
        j = j.replace(' ', '')
        temp += j.strip() + " "
    temp = temp.replace('&', '_')
    sub_cat_list.append(temp.strip())
project_data['clean_subcategories'] = sub_cat_list
project_data.drop(['project_subject_subcategories'], axis=1, inplace=True)
# count of all the words in corpus python: https://stackoverflow.com/a/22898595/4084039
my_counter = Counter()
for word in project_data['clean_subcategories'].values:
my_counter.update(word.split())
sub_cat_dict = dict(my_counter)
sorted_sub_cat_dict = dict(sorted(sub_cat_dict.items(), key=lambda kv: kv[1]))
# merge the four essay columns into one text column
# (a space separator keeps the last word of one essay from fusing with the first word of the next)
project_data["essay"] = project_data["project_essay_1"].map(str) + " " + \
                        project_data["project_essay_2"].map(str) + " " + \
                        project_data["project_essay_3"].map(str) + " " + \
                        project_data["project_essay_4"].map(str)
project_data.head(2)
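Note that essays 3 and 4 are blank for many projects (see "Notes on the Essay Data"), and .map(str) turns those NaNs into the literal string 'nan', which survives the preprocessing below. A quick check, assuming the raw essay columns are still present:

# fraction of projects with a missing third/fourth essay
print(project_data['project_essay_3'].isna().mean())
print(project_data['project_essay_4'].isna().mean())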
# printing some random essays
print(project_data['essay'].values[0])
print("="*50)
print(project_data['essay'].values[150])
print("="*50)
print(project_data['essay'].values[1000])
print("="*50)
print(project_data['essay'].values[20000])
print("="*50)
print(project_data['essay'].values[99999])
print("="*50)
# https://stackoverflow.com/a/47091490/4084039
def decontracted(phrase):
    # specific
    phrase = re.sub(r"won't", "will not", phrase)
    phrase = re.sub(r"can\'t", "can not", phrase)
    # general
    phrase = re.sub(r"n\'t", " not", phrase)
    phrase = re.sub(r"\'re", " are", phrase)
    phrase = re.sub(r"\'s", " is", phrase)
    phrase = re.sub(r"\'d", " would", phrase)
    phrase = re.sub(r"\'ll", " will", phrase)
    phrase = re.sub(r"\'t", " not", phrase)
    phrase = re.sub(r"\'ve", " have", phrase)
    phrase = re.sub(r"\'m", " am", phrase)
    return phrase
sent = decontracted(project_data['essay'].values[20000])
print(sent)
print("="*50)
# \r \n \t remove from string python: http://texthandler.com/info/remove-line-breaks-python/
sent = sent.replace('\\r', ' ')
sent = sent.replace('\\"', ' ')
sent = sent.replace('\\n', ' ')
print(sent)
# remove special characters: https://stackoverflow.com/a/5843547/4084039
sent = re.sub('[^A-Za-z0-9]+', ' ', sent)
print(sent)
# https://gist.github.com/sebleier/554280
# we are removing the words from the stop words list: 'no', 'nor', 'not'
stopwords= ['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', "you're", "you've",\
"you'll", "you'd", 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', \
'she', "she's", 'her', 'hers', 'herself', 'it', "it's", 'its', 'itself', 'they', 'them', 'their',\
'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', "that'll", 'these', 'those', \
'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', \
'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', \
'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after',\
'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further',\
'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more',\
'most', 'other', 'some', 'such', 'only', 'own', 'same', 'so', 'than', 'too', 'very', \
's', 't', 'can', 'will', 'just', 'don', "don't", 'should', "should've", 'now', 'd', 'll', 'm', 'o', 're', \
've', 'y', 'ain', 'aren', "aren't", 'couldn', "couldn't", 'didn', "didn't", 'doesn', "doesn't", 'hadn',\
"hadn't", 'hasn', "hasn't", 'haven', "haven't", 'isn', "isn't", 'ma', 'mightn', "mightn't", 'mustn',\
"mustn't", 'needn', "needn't", 'shan', "shan't", 'shouldn', "shouldn't", 'wasn', "wasn't", 'weren', "weren't", \
'won', "won't", 'wouldn', "wouldn't"]
# Combining all the above steps
from tqdm import tqdm
preprocessed_essays = []
# tqdm is for printing the status bar
for sentence in tqdm(project_data['essay'].values):
    sent = decontracted(sentence)
    sent = sent.replace('\\r', ' ')
    sent = sent.replace('\\"', ' ')
    sent = sent.replace('\\n', ' ')
    sent = re.sub('[^A-Za-z0-9]+', ' ', sent)
    # https://gist.github.com/sebleier/554280
    sent = ' '.join(e for e in sent.split() if e.lower() not in stopwords)
    preprocessed_essays.append(sent.lower().strip())
# after preprocessing
preprocessed_essays[20000]
# Add the cleaned essay to the dataframe and drop the raw essay column
project_data['clean_essay'] = preprocessed_essays
project_data.drop(['essay'], axis=1, inplace=True)
project_data.head(2)
# similarly we preprocess the titles
from tqdm import tqdm
preprocessed_title = []
# tqdm is for printing the status bar
for sentence in tqdm(project_data['project_title'].values):
    sent = decontracted(sentence)
    sent = sent.replace('\\r', ' ')
    sent = sent.replace('\\"', ' ')
    sent = sent.replace('\\n', ' ')
    sent = re.sub('[^A-Za-z0-9]+', ' ', sent)
    # https://gist.github.com/sebleier/554280
    sent = ' '.join(e for e in sent.split() if e.lower() not in stopwords)
    preprocessed_title.append(sent.lower().strip())
# after preprocessing
preprocessed_title[20000]
# Updating dataframe for clean project title and remove old project title
project_data['clean_project_title'] = preprocessed_title
project_data.drop(['project_title'], axis=1, inplace=True)
project_data.head(2)
project_data.columns
We are going to consider the following features (a quick glance at most of these columns follows below):

- school_state : categorical data
- clean_categories : categorical data
- clean_subcategories : categorical data
- project_grade_category : categorical data
- teacher_prefix : categorical data
- project_title : text data
- essay : text data
- project_resource_summary : text data (optional)
- quantity : numerical (optional)
- teacher_number_of_previously_posted_projects : numerical
- price : numerical
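A quick glance at most of these columns (the clean_* names assume the cleaning steps above have run; quantity and price only become available after the resources merge further below):

cols_of_interest = ['school_state', 'clean_categories', 'clean_subcategories',
                    'project_grade_category', 'teacher_prefix',
                    'teacher_number_of_previously_posted_projects']
project_data[cols_of_interest].head()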
# we use CountVectorizer to one-hot encode the categories
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(vocabulary=list(sorted_cat_dict.keys()), lowercase=False, binary=True)
categories_one_hot = vectorizer.fit_transform(project_data['clean_categories'].values)
print(vectorizer.get_feature_names())
print("Shape of matrix after one hot encoding ", categories_one_hot.shape)
# we use CountVectorizer to one-hot encode the subcategories
vectorizer = CountVectorizer(vocabulary=list(sorted_sub_cat_dict.keys()), lowercase=False, binary=True)
sub_categories_one_hot = vectorizer.fit_transform(project_data['clean_subcategories'].values)
print(vectorizer.get_feature_names())
print("Shape of matrix after one hot encoding ", sub_categories_one_hot.shape)
# you can do the same with school_state, teacher_prefix and project_grade_category
# We consider only the words that appear in at least 10 documents (rows or projects).
vectorizer = CountVectorizer(min_df=10)
text_bow = vectorizer.fit_transform(preprocessed_essays)
print("Shape of matrix after BoW encoding ", text_bow.shape)
# you can vectorize the title the same way
# before you vectorize the title make sure you preprocess it
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(min_df=10)
text_tfidf = vectorizer.fit_transform(preprocessed_essays)
print("Shape of matrix after TFIDF encoding ", text_tfidf.shape)
'''
# Reading glove vectors in python: https://stackoverflow.com/a/38230349/4084039
def loadGloveModel(gloveFile):
    print("Loading Glove Model")
    f = open(gloveFile, 'r', encoding="utf8")
    model = {}
    for line in tqdm(f):
        splitLine = line.split()
        word = splitLine[0]
        embedding = np.array([float(val) for val in splitLine[1:]])
        model[word] = embedding
    print("Done.", len(model), " words loaded!")
    return model

model = loadGloveModel('glove.42B.300d.txt')
# ============================
Output:
Loading Glove Model
1917495it [06:32, 4879.69it/s]
Done. 1917495 words loaded!
# ============================
words = []
for i in preprocessed_essays:
    words.extend(i.split(' '))
for i in preprocessed_title:
    words.extend(i.split(' '))
print("all the words in the corpus", len(words))
words = set(words)
print("the unique words in the corpus", len(words))
inter_words = set(model.keys()).intersection(words)
print("The number of words that are present in both glove vectors and our corpus", \
      len(inter_words), "(", np.round(len(inter_words)/len(words)*100, 3), "%)")
words_corpus = {}
words_glove = set(model.keys())
for i in words:
    if i in words_glove:
        words_corpus[i] = model[i]
print("word 2 vec length", len(words_corpus))
# storing variables into pickle files python: http://www.jessicayung.com/how-to-use-pickle-to-save-and-load-variables-in-python/
import pickle
with open('glove_vectors', 'wb') as f:
    pickle.dump(words_corpus, f)
'''
# storing variables into pickle files python: http://www.jessicayung.com/how-to-use-pickle-to-save-and-load-variables-in-python/
# make sure you have the glove_vectors file
with open('glove_vectors', 'rb') as f:
model = pickle.load(f)
glove_words = set(model.keys())
# average Word2Vec
# compute the average word2vec for each essay.
avg_w2v_vectors = []  # the avg-w2v of each essay is stored in this list
for sentence in tqdm(preprocessed_essays):  # for each essay
    vector = np.zeros(300)  # the GloVe vectors are 300-dimensional
    cnt_words = 0  # number of words with a valid vector in the essay
    for word in sentence.split():  # for each word in the essay
        if word in glove_words:
            vector += model[word]
            cnt_words += 1
    if cnt_words != 0:
        vector /= cnt_words
    avg_w2v_vectors.append(vector)
print(len(avg_w2v_vectors))
print(len(avg_w2v_vectors[0]))
#### 1.4.2.3 Using Pretrained Models: TFIDF weighted W2V

# toy example: S = ["abc def pqr", "def def def abc", "pqr pqr def"]
tfidf_model = TfidfVectorizer()
tfidf_model.fit(preprocessed_essays)
# build a dictionary with each word as the key and its idf as the value
dictionary = dict(zip(tfidf_model.get_feature_names(), list(tfidf_model.idf_)))
tfidf_words = set(tfidf_model.get_feature_names())
# TFIDF weighted Word2Vec
# compute the tfidf-weighted word2vec for each essay.
tfidf_w2v_vectors = []  # the tfidf-w2v of each essay is stored in this list
for sentence in tqdm(preprocessed_essays):  # for each essay
    vector = np.zeros(300)  # the GloVe vectors are 300-dimensional
    tf_idf_weight = 0  # sum of the tf-idf weights of the valid words in the essay
    for word in sentence.split():  # for each word in the essay
        if (word in glove_words) and (word in tfidf_words):
            vec = model[word]  # the vector for this word
            # tf-idf = idf (dictionary[word]) * tf (count of the word / number of words)
            tf_idf = dictionary[word] * (sentence.split().count(word) / len(sentence.split()))
            vector += (vec * tf_idf)  # accumulate the tfidf-weighted vector
            tf_idf_weight += tf_idf
    if tf_idf_weight != 0:
        vector /= tf_idf_weight
    tfidf_w2v_vectors.append(vector)
print(len(tfidf_w2v_vectors))
print(len(tfidf_w2v_vectors[0]))
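In symbols, for an essay $s$ the tf-idf weighted vector computed above is

$$v(s) \;=\; \frac{\sum_{w \in s} \text{tfidf}(w, s)\, \vec{v}_w}{\sum_{w \in s} \text{tfidf}(w, s)}, \qquad \text{tfidf}(w, s) = \text{idf}(w) \cdot \frac{\text{count}(w, s)}{|s|},$$

where $\vec{v}_w$ is the 300-dimensional GloVe vector of word $w$ and $|s|$ is the number of words in the essay.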
# Similarly you can vectorize for title also
price_data = resource_data.groupby('id').agg({'price':'sum', 'quantity':'sum'}).reset_index()
project_data = pd.merge(project_data, price_data, on='id', how='left')
# check this one: https://www.youtube.com/watch?v=0HOqOcln3Z4&t=530s
# standardization sklearn: https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html
from sklearn.preprocessing import StandardScaler
# price_standardized = price_scalar.fit(project_data['price'].values)
# this will raise the error:
# ValueError: Expected 2D array, got 1D array instead: array=[725.05 213.03 329. ... 399. 287.73 5.5 ].
# Reshape your data either using array.reshape(-1, 1)
price_scalar = StandardScaler()
price_scalar.fit(project_data['price'].values.reshape(-1,1))  # find the mean and standard deviation of this data
print(f"Mean : {price_scalar.mean_[0]}, Standard deviation : {np.sqrt(price_scalar.var_[0])}")
# Now standardize the data with the above mean and variance.
price_standardized = price_scalar.transform(project_data['price'].values.reshape(-1, 1))
price_standardized
print(categories_one_hot.shape)
print(sub_categories_one_hot.shape)
print(text_bow.shape)
print(price_standardized.shape)
# merge two sparse matrices: https://stackoverflow.com/a/19710648/4084039
from scipy.sparse import hstack
# the same hstack function can concatenate a sparse matrix and a dense matrix
X = hstack((categories_one_hot, sub_categories_one_hot, text_bow, price_standardized))
X.shape

from sklearn.datasets import load_digits
from sklearn.feature_selection import SelectKBest, chi2
X, y = load_digits(return_X_y=True)
X.shape
X_new = SelectKBest(chi2, k=20).fit_transform(X, y)
X_new.shape
# ========
# Output:
# (1797, 64)
# (1797, 20)
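The same idea can be tried on our data. A hedged sketch, assuming text_bow from the BoW section above and that project_data still carries the label; chi2 requires non-negative features, so it is applied to the raw BoW counts rather than to standardized (possibly negative) columns:

from sklearn.feature_selection import SelectKBest, chi2
y_label = project_data['project_is_approved'].values
# keep the 2000 BoW features most associated with the label
text_bow_top = SelectKBest(chi2, k=2000).fit_transform(text_bow, y_label)
print(text_bow_top.shape)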
# Combine the train.csv and resources.csv
# https://stackoverflow.com/questions/22407798/how-to-reset-a-dataframes-indexes-for-all-groups-in-one-step
# Note: the price/quantity aggregates were already merged into project_data above,
# so re-merging here would duplicate the price and quantity columns.
# price_data = resource_data.groupby('id').agg({'price':'sum', 'quantity':'sum'}).reset_index()
# project_data = pd.merge(project_data, price_data, on='id', how='left')
# The full dataset ran into memory errors, so we work with a 50k sample below.
from sklearn.model_selection import train_test_split
# remove unnecessary column: https://cmdlinetips.com/2018/04/how-to-drop-one-or-more-columns-in-pandas-dataframe/
project_data = project_data.drop(['Unnamed: 0','id','teacher_id','Date'], axis=1)
# https://www.geeksforgeeks.org/python-pandas-dataframe-sample/
project_data = project_data.sample(n=50000)
project_data = project_data[pd.notnull(project_data['teacher_prefix'])]
project_data.shape
project_data.head()
# please write all the code with proper documentation, and proper titles for each subsection
# go through documentation and blogs before you start coding
# first figure out what to do, and then think about how to do it
# reading and understanding error messages will be very helpful in debugging your code
# when you plot any graph make sure you use
#   a. a title that describes your plot; this will be very helpful to the reader
#   b. legends if needed
#   c. an x-axis label
#   d. a y-axis label
# Split into train (67%) and test (33%), stratified on the class label
tr_X, ts_X, tr_y, ts_y = train_test_split(project_data, project_data['project_is_approved'], test_size=0.33, random_state=1, stratify=project_data['project_is_approved'].values)
tr_X = tr_X.reset_index(drop=True)
ts_X = ts_X.reset_index(drop=True)
# Split the train data further into train (67%) and cv (33%)
tr_X, cv_X, tr_y, cv_y = train_test_split(tr_X, tr_y, test_size=0.33, random_state=1, stratify=tr_y)
tr_X = tr_X.reset_index(drop=True)
cv_X = cv_X.reset_index(drop=True)
tr_X.drop(['project_is_approved'], axis=1, inplace=True)
ts_X.drop(['project_is_approved'], axis=1, inplace=True)
cv_X.drop(['project_is_approved'], axis=1, inplace=True)
print('Shape of train data:', tr_X.shape)
print('Shape of test data:', ts_X.shape)
print('Shape of CV data:', cv_X.shape)
# please write all the code with proper documentation, and proper titles for each subsection
# go through documentation and blogs before you start coding
# first figure out what to do, and then think about how to do it
# reading and understanding error messages will be very helpful in debugging your code
# make sure you featurize train and test data separately
# when you plot any graph make sure you use
#   a. a title that describes your plot; this will be very helpful to the reader
#   b. legends if needed
#   c. an x-axis label
#   d. a y-axis label
# Numerical features with train data

### 1) quantity

# We standardize quantity (zero mean, unit variance)
# standardization sklearn: https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html
# quantity_normalized = quantity_scalar.fit(tr_X['quantity'].values)
# this will raise the error:
# ValueError: Expected 2D array, got 1D array instead ...
# Reshape your data using array.reshape(-1, 1)
from sklearn.preprocessing import StandardScaler
quantity_scalar = StandardScaler()
quantity_scalar.fit(tr_X['quantity'].values.reshape(-1,1))  # find the mean and standard deviation of this data
print(f"for quantity -> Mean : {quantity_scalar.mean_[0]}, Standard deviation : {np.sqrt(quantity_scalar.var_[0])}")
# Now standardize the data with the above mean and variance.
quantity_normalized = quantity_scalar.transform(tr_X['quantity'].values.reshape(-1, 1))
quantity_normalized.shape
### 2) price

# price is already numerical; we standardize it (zero mean, unit variance)
# standardization sklearn: https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html
price_scalar = StandardScaler()
price_scalar.fit(tr_X['price'].values.reshape(-1,1))  # find the mean and standard deviation of this data
print(f"for price -> Mean : {price_scalar.mean_[0]}, Standard deviation : {np.sqrt(price_scalar.var_[0])}")
# Now standardize the data with the above mean and variance.
price_normalized = price_scalar.transform(tr_X['price'].values.reshape(-1, 1))
price_normalized.shape
### 3) teacher_number_of_previously_posted_projects

# We standardize teacher_number_of_previously_posted_projects (zero mean, unit variance)
# standardization sklearn: https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html
teacher_number_of_previously_posted_projects_scalar = StandardScaler()
teacher_number_of_previously_posted_projects_scalar.fit(tr_X['teacher_number_of_previously_posted_projects'].values.reshape(-1,1))  # find the mean and standard deviation of this data
print(f"for teacher_number_of_previously_posted_projects -> Mean : {teacher_number_of_previously_posted_projects_scalar.mean_[0]}, Standard deviation : {np.sqrt(teacher_number_of_previously_posted_projects_scalar.var_[0])}")
# Now standardize the data with the above mean and variance.
teacher_number_of_previously_posted_projects_normalized = teacher_number_of_previously_posted_projects_scalar.transform(tr_X['teacher_number_of_previously_posted_projects'].values.reshape(-1, 1))
print('Shape of quantity:', quantity_normalized.shape)
print('Shape of price:', price_normalized.shape)
print('Shape of teacher_number_of_previously_posted_projects:', teacher_number_of_previously_posted_projects_normalized.shape)
# Transform numerical attributes for test data
ts_price = price_scalar.transform(ts_X['price'].values.reshape(-1,1))
ts_quantity = quantity_scalar.transform(ts_X['quantity'].values.reshape(-1,1))
ts_teacher_number_of_previously_posted_projects = \
teacher_number_of_previously_posted_projects_scalar.transform(ts_X['teacher_number_of_previously_posted_projects'].\
values.reshape(-1,1))
# Transform numerical attributes for cv data
cv_price = price_scalar.transform(cv_X['price'].values.reshape(-1,1))
cv_quantity = quantity_scalar.transform(cv_X['quantity'].values.reshape(-1,1))
cv_teacher_number_of_previously_posted_projects = \
teacher_number_of_previously_posted_projects_scalar.transform(cv_X['teacher_number_of_previously_posted_projects'].\
values.reshape(-1,1))
print('--------------Test data--------------')
print('Shape of quantity:', ts_quantity.shape)
print('Shape of price:', ts_price.shape)
print('Shape of teacher_number_of_previously_posted_projects:', ts_teacher_number_of_previously_posted_projects.shape)
print('--------------CV data--------------')
print('Shape of quantity:', cv_quantity.shape)
print('Shape of price:', cv_price.shape)
print('Shape of teacher_number_of_previously_posted_projects:', cv_teacher_number_of_previously_posted_projects.shape)
# Categorical features with train data

### 1) school_state

print('==================================================================\n')
# CountVectorizer whose vocabulary is the unique school-state codes; binary=True gives a boolean BoW (one-hot)
vectorizer_school_state = CountVectorizer(vocabulary=tr_X['school_state'].unique(), lowercase=False, binary=True)
vectorizer_school_state.fit(tr_X['school_state'].values)
print('List of features in school_state', vectorizer_school_state.get_feature_names())
school_state_one_hot = vectorizer_school_state.transform(tr_X['school_state'].values)
print("\nShape of school_state matrix after one hot encoding ", school_state_one_hot.shape)
### 2) project_subject_categories

print('==================================================================\n')
vectorizer_categories = CountVectorizer(vocabulary=list(sorted_cat_dict.keys()), lowercase=False, binary=True)
vectorizer_categories.fit(tr_X['clean_categories'].values)
print('List of features in project_subject_categories', vectorizer_categories.get_feature_names())
categories_one_hot = vectorizer_categories.transform(tr_X['clean_categories'].values)
print("\nShape of project_subject_categories matrix after one hot encoding ", categories_one_hot.shape)
### 3) project_subject_subcategories

print('==================================================================\n')
vectorizer_subcategories = CountVectorizer(vocabulary=list(sorted_sub_cat_dict.keys()), lowercase=False, binary=True)
vectorizer_subcategories.fit(tr_X['clean_subcategories'].values)
print('List of features in project_subject_subcategories', vectorizer_subcategories.get_feature_names())
subcategories_one_hot = vectorizer_subcategories.transform(tr_X['clean_subcategories'].values)
print("\nShape of project_subject_subcategories matrix after one hot encoding ", subcategories_one_hot.shape)
### 4) project_grade_category

print('==================================================================\n')
# One hot encoding for project_grade_category
# CountVectorizer whose vocabulary is the unique grade categories; binary=True gives a boolean BoW (one-hot)
vectorizer_grade_category = CountVectorizer(vocabulary=tr_X['project_grade_category'].unique(), lowercase=False, binary=True)
vectorizer_grade_category.fit(tr_X['project_grade_category'].values)
print('List of features in project_grade_category', vectorizer_grade_category.get_feature_names())
project_grade_category_one_hot = vectorizer_grade_category.transform(tr_X['project_grade_category'].values)
print("\nShape of project_grade_category matrix after one hot encoding ", project_grade_category_one_hot.shape)
### 5) teacher_prefix

print('==================================================================\n')
# One hot encoding for teacher_prefix
# CountVectorizer whose vocabulary is the unique teacher prefixes; binary=True gives a boolean BoW (one-hot)
# Some entries are NaN, so we replace them with the string 'None'
tr_X['teacher_prefix'] = tr_X['teacher_prefix'].fillna('None')
vectorizer_teacher_prefix = CountVectorizer(vocabulary=tr_X['teacher_prefix'].unique(), lowercase=False, binary=True)
vectorizer_teacher_prefix.fit(tr_X['teacher_prefix'].values)
print('List of features in teacher_prefix', vectorizer_teacher_prefix.get_feature_names())
teacher_prefix_one_hot = vectorizer_teacher_prefix.transform(tr_X['teacher_prefix'].values)
print("\nShape of teacher_prefix matrix after one hot encoding ", teacher_prefix_one_hot.shape)
# Transform categorical for test data
ts_school_state = vectorizer_school_state.transform(ts_X['school_state'].values)
ts_project_subject_category = vectorizer_categories.transform(ts_X['clean_categories'].values)
ts_project_subject_subcategory = vectorizer_subcategories.transform(ts_X['clean_subcategories'].values)
ts_project_grade_category = vectorizer_grade_category.transform(ts_X['project_grade_category'].values)
ts_teacher_prefix = vectorizer_teacher_prefix.transform(ts_X['teacher_prefix'].values)
# Transform categorical for cv data
cv_school_state = vectorizer_school_state.transform(cv_X['school_state'].values)
cv_project_subject_category = vectorizer_categories.transform(cv_X['clean_categories'].values)
cv_project_subject_subcategory = vectorizer_subcategories.transform(cv_X['clean_subcategories'].values)
cv_project_grade_category = vectorizer_grade_category.transform(cv_X['project_grade_category'].values)
cv_teacher_prefix = vectorizer_teacher_prefix.transform(cv_X['teacher_prefix'].values)
print('--------------Test data--------------')
print('Shape of school_state:', ts_school_state.shape)
print('Shape of project_subject_categories:', ts_project_subject_category.shape)
print('Shape of project_subject_subcategories:', ts_project_subject_subcategory.shape)
print('Shape of project_grade_category:', ts_project_grade_category.shape)
print('Shape of teacher_prefix:', ts_teacher_prefix.shape)
print('--------------CV data--------------')
print('Shape of school_state:', cv_school_state.shape)
print('Shape of project_subject_categories:', cv_project_subject_category.shape)
print('Shape of project_subject_subcategories:', cv_project_subject_subcategory.shape)
print('Shape of project_grade_category:', cv_project_grade_category.shape)
print('Shape of teacher_prefix:', cv_teacher_prefix.shape)
We already preprocessed both essay and project_title in the text-processing sections (1.3 and 1.4) above.
Apply KNN to each kind of featurization mentioned in the instructions.
For every model you work on, make sure you do step 2 and step 3 of the instructions.
### BoW in Essay and Title on Train

# We consider only the words that appear in at least 20 documents (rows or projects).
vectorizer_bow = CountVectorizer(min_df=20)
text_bow = vectorizer_bow.fit_transform(tr_X['clean_essay'].values)
print("Shape of essay matrix after BoW encoding on train", text_bow.shape)
# Vectorize the title the same way (capped at 5000 features)
vectorizer_bowt = CountVectorizer(min_df=20, max_features=5000)
title_bow = vectorizer_bowt.fit_transform(tr_X['clean_project_title'])
print("Shape of title matrix after BoW encoding on train ", title_bow.shape)
### BoW in Essay and Title on CV

print('===========================================================\n')
cv_essay = vectorizer_bow.transform(cv_X['clean_essay'])
print("Shape of essay matrix after BoW encoding on cv", cv_essay.shape)
cv_title = vectorizer_bowt.transform(cv_X['clean_project_title'])
print("Shape of title matrix after BoW encoding on cv", cv_title.shape)

### BoW in Essay and Title on Test

print('===========================================================\n')
ts_essay = vectorizer_bow.transform(ts_X['clean_essay'])
print("Shape of essay matrix after BoW encoding on test", ts_essay.shape)
ts_title = vectorizer_bowt.transform(ts_X['clean_project_title'])
print("Shape of title matrix after BoW encoding on test", ts_title.shape)
## Convert to dense and standardize

text_bow = text_bow.toarray()
# For essay in train data
text_scalar = StandardScaler()
text_scalar.fit(text_bow)
print(f"for essay in train data -> Mean : {text_scalar.mean_[0]}, Standard deviation : {np.sqrt(text_scalar.var_[0])}")
# Now standardize the data with the above mean and variance.
text_normalized = text_scalar.transform(text_bow)
# For title in train data
title_bow = title_bow.toarray()
title_scalar = StandardScaler()
title_scalar.fit(title_bow)
print(f"for title in train data -> Mean : {title_scalar.mean_[0]}, Standard deviation : {np.sqrt(title_scalar.var_[0])}")
# Now standardize the data with the above mean and variance.
title_normalized = title_scalar.transform(title_bow)
# Transform essay and title in the cv data with the scalers fit on train
cv_essay = cv_essay.toarray()
cv_title = cv_title.toarray()
cv_essay_normalized = text_scalar.transform(cv_essay)
cv_title_normalized = title_scalar.transform(cv_title)
# Transform essay and title in the test data with the scalers fit on train
ts_essay = ts_essay.toarray()
ts_title = ts_title.toarray()
ts_essay_normalized = text_scalar.transform(ts_essay)
ts_title_normalized = title_scalar.transform(ts_title)
print('Shape of normalized essay in train data', text_normalized.shape)
print('Shape of normalized title in train data', title_normalized.shape)
print('=======================================================\n')
print('Shape of normalized essay in cv data', cv_essay_normalized.shape)
print('Shape of normalized title in cv data', cv_title_normalized.shape)
print('=======================================================\n')
print('Shape of normalized essay in test data', ts_essay_normalized.shape)
print('Shape of normalized title in test data', ts_title_normalized.shape)
### TFIDF in Essay and Title on Train

# We consider only the words that appear in at least 20 documents (rows or projects).
vectorizer_tfidf = TfidfVectorizer(min_df=20)
text_tfidf = vectorizer_tfidf.fit_transform(tr_X['clean_essay'].values)
print("Shape of essay matrix after TFIDF encoding on train", text_tfidf.shape)
# Vectorize the title the same way
vectorizer_tfidft = TfidfVectorizer(min_df=20)
title_tfidf = vectorizer_tfidft.fit_transform(tr_X['clean_project_title'])
print("Shape of title matrix after TFIDF encoding on train ", title_tfidf.shape)
### TFIDF in Essay and Title on CV

print('===========================================================\n')
cv_essay = vectorizer_tfidf.transform(cv_X['clean_essay'])
print("Shape of essay matrix after TFIDF encoding on cv", cv_essay.shape)
cv_title = vectorizer_tfidft.transform(cv_X['clean_project_title'])
print("Shape of title matrix after TFIDF encoding on cv", cv_title.shape)

### TFIDF in Essay and Title on Test

print('===========================================================\n')
ts_essay = vectorizer_tfidf.transform(ts_X['clean_essay'])
print("Shape of essay matrix after TFIDF encoding on test", ts_essay.shape)
ts_title = vectorizer_tfidft.transform(ts_X['clean_project_title'])
print("Shape of title matrix after TFIDF encoding on test", ts_title.shape)
## Convert to dense and standardize

text_tfidf = text_tfidf.toarray()
# For essay in train data
text_scalar = StandardScaler()
text_scalar.fit(text_tfidf)
print(f"on Essay-> Mean : {text_scalar.mean_[0]}, Standard deviation : {np.sqrt(text_scalar.var_[0])}")
# Now standardize the data with the above mean and variance.
text_normalized = text_scalar.transform(text_tfidf)
# For title in train data
title_tfidf = title_tfidf.toarray()
title_scalar = StandardScaler()
title_scalar.fit(title_tfidf)
print(f"on Title-> Mean : {title_scalar.mean_[0]}, Standard deviation : {np.sqrt(title_scalar.var_[0])}")
# Now standardize the data with the above mean and variance.
title_normalized = title_scalar.transform(title_tfidf)
# Transform essay and title in the cv data with the scalers fit on train
cv_essay = cv_essay.toarray()
cv_title = cv_title.toarray()
cv_essay_normalized = text_scalar.transform(cv_essay)
cv_title_normalized = title_scalar.transform(cv_title)
# Transform essay and title in the test data with the scalers fit on train
ts_essay = ts_essay.toarray()
ts_title = ts_title.toarray()
ts_essay_normalized = text_scalar.transform(ts_essay)
ts_title_normalized = title_scalar.transform(ts_title)
print('Shape of normalized essay in train data', text_normalized.shape)
print('Shape of normalized title in train data', title_normalized.shape)
print('=======================================================\n')
print('Shape of normalized essay in cv data', cv_essay_normalized.shape)
print('Shape of normalized title in cv data', cv_title_normalized.shape)
print('=======================================================\n')
print('Shape of normalized essay in test data', ts_essay_normalized.shape)
print('Shape of normalized title in test data', ts_title_normalized.shape)
# storing variables into pickle files python: http://www.jessicayung.com/how-to-use-pickle-to-save-and-load-variables-in-python/
# make sure you have the glove_vectors file
with open('glove_vectors', 'rb') as f:
model = pickle.load(f)
glove_words = set(model.keys())
# average Word2Vec
# helper: compute the average word2vec for each sentence in a list of sentences
def avg_w2v(sentences):
    vectors = []  # the avg-w2v of each sentence is stored in this list
    for sentence in tqdm(sentences):  # for each essay/title
        vector = np.zeros(300)  # the GloVe vectors are 300-dimensional
        cnt_words = 0  # number of words with a valid vector in the sentence
        for word in sentence.split():
            if word in glove_words:
                vector += model[word]
                cnt_words += 1
        if cnt_words != 0:
            vector /= cnt_words
        vectors.append(vector)
    return vectors

# average Word2Vec for train
avg_w2v_vectors = avg_w2v(tr_X['clean_essay'].values)        # essays
avg_w2v_title = avg_w2v(tr_X['clean_project_title'].values)  # titles
print(len(avg_w2v_vectors), len(avg_w2v_vectors[0]))
print(len(avg_w2v_title), len(avg_w2v_title[0]))
# average Word2Vec for cv
avg_w2v_cv_vectors = avg_w2v(cv_X['clean_essay'].values)
avg_w2v_cv_title = avg_w2v(cv_X['clean_project_title'].values)
print(len(avg_w2v_cv_vectors), len(avg_w2v_cv_vectors[0]))
print(len(avg_w2v_cv_title), len(avg_w2v_cv_title[0]))
# average Word2Vec for test
avg_w2v_ts_vectors = avg_w2v(ts_X['clean_essay'].values)
avg_w2v_ts_title = avg_w2v(ts_X['clean_project_title'].values)
print(len(avg_w2v_ts_vectors), len(avg_w2v_ts_vectors[0]))
print(len(avg_w2v_ts_title), len(avg_w2v_ts_title[0]))
avg_w2v_vectors = np.array(avg_w2v_vectors)
# For essay and title in train data
text_scalar = StandardScaler()
text_scalar.fit(avg_w2v_vectors)
print(f"on Essay-> Mean : {text_scalar.mean_[0]}, Standard deviation : {np.sqrt(text_scalar.var_[0])}")
# Now standardize the data with the above mean and variance.
text_normalized = text_scalar.transform(avg_w2v_vectors)
avg_w2v_title = np.array(avg_w2v_title)
title_scalar = StandardScaler()
title_scalar.fit(avg_w2v_title)
print(f"on Title-> Mean : {title_scalar.mean_[0]}, Standard deviation : {np.sqrt(title_scalar.var_[0])}")
# Now standardize the data with the above mean and variance.
title_normalized = title_scalar.transform(avg_w2v_title)
# Transform the CV and test data with the scalers fit on train
avg_w2v_cv_vectors = np.array(avg_w2v_cv_vectors)
avg_w2v_cv_title = np.array(avg_w2v_cv_title)
cv_essay_normalized = text_scalar.transform(avg_w2v_cv_vectors)
cv_title_normalized = title_scalar.transform(avg_w2v_cv_title)
avg_w2v_ts_vectors = np.array(avg_w2v_ts_vectors)
avg_w2v_ts_title = np.array(avg_w2v_ts_title)
ts_essay_normalized = text_scalar.transform(avg_w2v_ts_vectors)
ts_title_normalized = title_scalar.transform(avg_w2v_ts_title)
print('Shape of normalized essay in train data', text_normalized.shape)
print('Shape of normalized title in train data', title_normalized.shape)
print('=======================================================\n')
print('Shape of normalized essay in cv data', cv_essay_normalized.shape)
print('Shape of normalized title in cv data', cv_title_normalized.shape)
print('=======================================================\n')
print('Shape of normalized essay in test data', ts_essay_normalized.shape)
print('Shape of normalized title in test data', ts_title_normalized.shape)
# storing variables into pickle files python: http://www.jessicayung.com/how-to-use-pickle-to-save-and-load-variables-in-python/
# make sure you have the glove_vectors file
with open('glove_vectors', 'rb') as f:
model = pickle.load(f)
glove_words = set(model.keys())
# Tfidf weighted w2v on essay and title (train/cv/test)
# fit a TfidfVectorizer on the train essays and another on the train titles,
# keeping a word -> idf dictionary for each
tfidf_model = TfidfVectorizer()
tfidf_model.fit(tr_X['clean_essay'].values)
dictionary = dict(zip(tfidf_model.get_feature_names(), list(tfidf_model.idf_)))
tfidf_words = set(tfidf_model.get_feature_names())
tfidf_model2 = TfidfVectorizer()
tfidf_model2.fit(tr_X['clean_project_title'].values)
dictionary2 = dict(zip(tfidf_model2.get_feature_names(), list(tfidf_model2.idf_)))
tfidf_words2 = set(tfidf_model2.get_feature_names())

# helper: compute the tfidf-weighted word2vec for each sentence in a list
def tfidf_w2v(sentences, idf_dict, idf_words):
    vectors = []  # the tfidf-w2v of each sentence is stored in this list
    for sentence in tqdm(sentences):
        vector = np.zeros(300)  # the GloVe vectors are 300-dimensional
        tf_idf_weight = 0  # sum of the tf-idf weights of the valid words
        words = sentence.split()
        for word in words:
            if (word in glove_words) and (word in idf_words):
                vec = model[word]  # the vector for this word
                # tf-idf = idf * tf, where tf = count of the word / number of words
                tf_idf = idf_dict[word] * (words.count(word) / len(words))
                vector += (vec * tf_idf)  # accumulate the tfidf-weighted vector
                tf_idf_weight += tf_idf
        if tf_idf_weight != 0:
            vector /= tf_idf_weight
        vectors.append(vector)
    return vectors

# train data (titles use the idf dictionary fit on the train titles)
tfidf_w2v_vectors = tfidf_w2v(tr_X['clean_essay'].values, dictionary, tfidf_words)
tfidf_w2v_title = tfidf_w2v(tr_X['clean_project_title'].values, dictionary2, tfidf_words2)
print(len(tfidf_w2v_vectors), len(tfidf_w2v_vectors[0]))
print(len(tfidf_w2v_title), len(tfidf_w2v_title[0]))
# cv data
tfidf_w2v_vectors_cv = tfidf_w2v(cv_X['clean_essay'].values, dictionary, tfidf_words)
tfidf_w2v_title_cv = tfidf_w2v(cv_X['clean_project_title'].values, dictionary2, tfidf_words2)
print(len(tfidf_w2v_vectors_cv), len(tfidf_w2v_vectors_cv[0]))
print(len(tfidf_w2v_title_cv), len(tfidf_w2v_title_cv[0]))
# test data
tfidf_w2v_ts_vectors = tfidf_w2v(ts_X['clean_essay'].values, dictionary, tfidf_words)
tfidf_w2v_ts_title = tfidf_w2v(ts_X['clean_project_title'].values, dictionary2, tfidf_words2)
print(len(tfidf_w2v_ts_vectors), len(tfidf_w2v_ts_vectors[0]))
print(len(tfidf_w2v_ts_title), len(tfidf_w2v_ts_title[0]))
tfidf_w2v_vectors = np.array(tfidf_w2v_vectors)
# For essay and title in train data
text_scalar = StandardScaler()
text_scalar.fit(tfidf_w2v_vectors)
print(f"on Essay-> Mean : {text_scalar.mean_[0]}, Standard deviation : {np.sqrt(text_scalar.var_[0])}")
# Now standardize the data with the above mean and variance.
text_normalized = text_scalar.transform(tfidf_w2v_vectors)
tfidf_w2v_title = np.array(tfidf_w2v_title)
title_scalar = StandardScaler()
title_scalar.fit(tfidf_w2v_title)
print(f"on Title-> Mean : {title_scalar.mean_[0]}, Standard deviation : {np.sqrt(title_scalar.var_[0])}")
# Now standardize the data with the above mean and variance.
title_normalized = title_scalar.transform(tfidf_w2v_title)
# Now transform the cv and test data with the scalers fit on train.
tfidf_w2v_vectors_cv = np.array(tfidf_w2v_vectors_cv)
tfidf_w2v_title_cv = np.array(tfidf_w2v_title_cv)
cv_essay_normalized = text_scalar.transform(tfidf_w2v_vectors_cv)
cv_title_normalized = title_scalar.transform(tfidf_w2v_title_cv)
tfidf_w2v_ts_vectors = np.array(tfidf_w2v_ts_vectors)
tfidf_w2v_ts_title = np.array(tfidf_w2v_ts_title)
ts_essay_normalized = text_scalar.transform(tfidf_w2v_ts_vectors)
ts_title_normalized = title_scalar.transform(tfidf_w2v_ts_title)
print('Shape of normalized essay in train data', text_normalized.shape)
print('Shape of normalized title in train data', title_normalized.shape)
print('=======================================================\n')
print('Shape of normalized essay in cv data', cv_essay_normalized.shape)
print('Shape of normalized title in cv data', cv_title_normalized.shape)
print('=======================================================\n')
print('Shape of normalized essay in test data', ts_essay_normalized.shape)
print('Shape of normalized title in test data', ts_title_normalized.shape)
# for train data
from scipy.sparse import hstack
tr_X = hstack((quantity_normalized, price_normalized, teacher_number_of_previously_posted_projects_normalized, \
school_state_one_hot, categories_one_hot, subcategories_one_hot, project_grade_category_one_hot, \
teacher_prefix_one_hot, text_normalized, title_normalized))
tr_X.shape
tr_X = tr_X.toarray()
# for cv data
cv_X = hstack((cv_quantity, cv_price, cv_teacher_number_of_previously_posted_projects, cv_school_state, \
cv_project_subject_category, cv_project_subject_subcategory,cv_project_grade_category, \
cv_teacher_prefix, cv_essay_normalized, cv_title_normalized))
cv_X.shape
cv_X = cv_X.toarray()
# for test data
ts_X = hstack((ts_quantity, ts_price, ts_teacher_number_of_previously_posted_projects, ts_school_state, \
ts_project_subject_category, ts_project_subject_subcategory,ts_project_grade_category, \
ts_teacher_prefix, ts_essay_normalized, ts_title_normalized))
ts_X.shape
ts_X = ts_X.toarray()
from sklearn.neighbors import KNeighborsClassifier
import tqdm

def batch_predict(clf, data):
    # roc_auc_score(y_true, y_score): the 2nd parameter should be probability estimates
    # of the positive class, not the predicted labels
    y_data_pred = []
    tr_loop = data.shape[0] - data.shape[0] % 1000
    # e.g. if data has 49041 rows, tr_loop = 49041 - 49041 % 1000 = 49000
    # iterate in chunks of 1000 rows up to the last multiple of 1000
    for i in range(0, tr_loop, 1000):
        y_data_pred.extend(clf.predict_proba(data[i:i+1000])[:, 1])
    # predict for the remaining rows
    if data.shape[0] % 1000 != 0:
        y_data_pred.extend(clf.predict_proba(data[tr_loop:])[:, 1])
    return y_data_pred
def knnbrutealgo(X, y, cv_X, cv_y):
"""
Parameters:
X - train feature data
y- train class data
cv_X - valid feature data
cv_y - valid class data
Return:
Print the AUC score of CV data
"""
tr_score = []
cv_score = []
index = 0
for i in tqdm.tqdm_notebook([1,5,11,25,51,101]):
# Create knn model instance
knn = KNeighborsClassifier(n_neighbors=i, algorithm='brute')
# Fit the model with train data
knn.fit(X,y)
# Predict the cv data
predict_cv = batch_predict(knn, cv_X)
predict_tr = batch_predict(knn, X)
        # Evaluate AUC to see how well the model ranks the class labels
tr_score.append(metrics.roc_auc_score(y, predict_tr))
cv_score.append(metrics.roc_auc_score(cv_y, predict_cv))
print('\nTrain AUC and CV AUC score for k:{0} is {1} , {2}'.format(i,tr_score[index],cv_score[index]))
index += 1
return tr_score, cv_score
def plotauc_tr_cv(feature_name, n_list, X_score, cv_X_score):
"""
Parameters:
k - number of neighbors
X_score - Train AUC score
cv_X_score - CV AUC score
Return:
Save FPR, TRP and ROC for train data and Plot the graph of Train and CV data
"""
plt.plot(n_list, X_score, label='Train AUC')
plt.plot(n_list, cv_X_score, label='CV AUC')
plt.scatter(n_list, X_score)
plt.scatter(n_list, cv_X_score)
plt.legend()
    plt.xlabel('Hyperparameter (k)')
plt.ylabel('AUC Score')
plt.title('Train AUC vs CV AUC plot with {0} features'.format(feature_name))
plt.show()
def plotauc_tr_ts(k, feature_name, X, y, ts_X, ts_y):
"""
Parameters:
k = number of neigbhors
feature_name - (string) Write feature to print the plot title
X - train feature data
y - train class data
fpr - FPR value for train data
tpr - TPR value for train data
roc_auc - AUC value of train data
Return:
Save the prediction of test data and plot the graph for Train and Test data
"""
knn = KNeighborsClassifier(n_neighbors=k, algorithm='brute')
# Fit the model with train data
knn.fit(X,y)
tr_predict = batch_predict(knn, X)
ts_predict = batch_predict(knn, ts_X)
    # Compute ROC curve and AUC for train and test data
    fpr, tpr, tr_thre = roc_curve(y, tr_predict)
    roc_auc = auc(fpr, tpr)
    fpr_t, tpr_t, _ = roc_curve(ts_y, ts_predict)
    roc_auc_t = auc(fpr_t, tpr_t)
plt.figure()
lw = 2
    plt.plot(fpr, tpr, color='darkorange',
             lw=lw, label='Train ROC curve (area = %0.2f)' % roc_auc)
    plt.plot(fpr_t, tpr_t, color='blue',
             lw=lw, label='Test ROC curve (area = %0.2f)' % roc_auc_t)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC With Maximum AUC on KNN Classifier for k={0} on {1} features'.format(k,feature_name))
plt.legend(loc="lower right")
plt.show()
return tr_thre, fpr, tpr, tr_predict, ts_predict
# we write our own predict function with a chosen threshold
# we pick the threshold that gives a low FPR together with a high TPR
def find_best_threshold(threshold, fpr, tpr):
    t = threshold[np.argmax(tpr*(1-fpr))]
    # tpr*(1-fpr) is maximal when fpr is very low and tpr is very high
    print("the maximum value of tpr*(1-fpr)", max(tpr*(1-fpr)), "for threshold", np.round(t, 3))
    return t
def predict_with_best_t(proba, threshold):
    predictions = []
    for i in proba:
        if i >= threshold:
            predictions.append(1)
        else:
            predictions.append(0)
    return predictions
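For intuition, a small hedged sketch of how these two helpers plug into roc_curve output (the labels and scores below are made up for illustration); predict_with_best_t could equivalently be vectorized as (np.array(proba) >= t).astype(int):
# Toy illustration (made-up scores, not part of the pipeline): roc_curve
# returns thresholds alongside fpr/tpr; we pick the threshold maximizing
# tpr*(1-fpr), then binarize the probabilities with it.
import numpy as np
from sklearn.metrics import roc_curve
y_true = np.array([0, 0, 1, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7])
fpr_toy, tpr_toy, thr_toy = roc_curve(y_true, y_score)
best_t = find_best_threshold(thr_toy, fpr_toy, tpr_toy)
print(predict_with_best_t(y_score, best_t))  # hard 0/1 labels at the chosen threshold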
def plot_cm(feature_names, tr_thresholds, train_fpr, train_tpr, y_train, y_train_pred, y_test, y_test_pred):
"""
Parameters:
k = number of neigbhors
feature_name - (string) Write feature to print the plot title
y_true - test class data
y_pred - test prediction value
Return:
Plot the confusion matrix
"""
best_t = find_best_threshold(tr_thresholds, train_fpr, train_tpr)
print("Train confusion matrix")
cm = metrics.confusion_matrix(y_train, predict_with_best_t(y_train_pred, best_t))
plt.figure(figsize = (10,7))
sns.heatmap(cm, annot=True, fmt="d")
plt.xlabel('Predicted Class')
plt.ylabel('True Class')
    plt.title('Confusion matrix for train data, kNN with {0} features'.format(feature_names))
print("Test confusion matrix")
cm = metrics.confusion_matrix(y_test, predict_with_best_t(y_test_pred, best_t))
plt.figure(figsize = (10,7))
sns.heatmap(cm, annot=True, fmt="d")
plt.xlabel('Predicted Class')
plt.ylabel('True Class')
    plt.title('Confusion matrix for test data, kNN with {0} features'.format(feature_names))
# kNN on BoW features
tr_score, cv_score = knnbrutealgo(tr_X, tr_y, cv_X, cv_y)
Observation: k = 101 gives the maximum CV AUC score for kNN on BoW features.
Note: Only six values of k were tried because of the long computation time.
len(tr_score), len(cv_score)
plotauc_tr_cv('BoW', [1,5,11,25,51,101], tr_score, cv_score)
tr_thre, fpr, tpr, tr_predict, ts_predict = plotauc_tr_ts(101, 'BoW', tr_X, tr_y, ts_X, ts_y)
plot_cm('BoW', tr_thre, fpr, tpr, tr_y, tr_predict, ts_y, ts_predict)
# kNN on TFIDF features
tr_score, cv_score = knnbrutealgo(tr_X, tr_y, cv_X, cv_y)
Observation: k = 51 gives the maximum CV AUC score for kNN on TFIDF features.
plotauc_tr_cv('TFIDF', [1,5,11,25,51,101], tr_score, cv_score)
tr_thre, fpr, tpr, tr_predict, ts_predict = plotauc_tr_ts(51, 'TFIDF', tr_X, tr_y, ts_X, ts_y)
plot_cm('TFIDF', tr_thre, fpr, tpr, tr_y, tr_predict, ts_y, ts_predict)
# kNN on AVG W2V features
tr_score, cv_score = knnbrutealgo(tr_X, tr_y, cv_X, cv_y)
Observation: k = 101 gives the maximum CV AUC score for kNN on AVG W2V features.
plotauc_tr_cv('AVG W2V', [1,5,11,25,51,101], tr_score, cv_score)
tr_thre, fpr, tpr, tr_predict, ts_predict = plotauc_tr_ts(101, 'AVG W2V', tr_X, tr_y, ts_X, ts_y)
plot_cm('AVG W2V', tr_thre, fpr, tpr, tr_y, tr_predict, ts_y, ts_predict)
# kNN on TFIDF W2V features
tr_score, cv_score = knnbrutealgo(tr_X, tr_y, cv_X, cv_y)
Observation: k = 101 gives the maximum CV AUC score for kNN on TFIDF W2V features.
plotauc_tr_cv('TFIDF W2V', [1,5,11,25,51,101], tr_score, cv_score)
tr_thre, fpr, tpr, tr_predict, ts_predict = plotauc_tr_ts(101, 'TFIDF W2V', tr_X, tr_y, ts_X, ts_y)
plot_cm('TFIDF W2V', tr_thre, fpr, tpr, tr_y, tr_predict, ts_y, ts_predict)
### 1) quantity
# We represent quantity as a numerical value scaled to the range 0-1
# MinMaxScaler: https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html
from sklearn.preprocessing import MinMaxScaler
quantity_scalar = MinMaxScaler()
quantity_scalar.fit(tr_X['quantity'].values.reshape(-1,1)) # finding the min and max of this data
quantity_normalized = quantity_scalar.transform(tr_X['quantity'].values.reshape(-1, 1))
### 2) price
# The price feature is already numerical; we scale it to the range 0-1
# MinMaxScaler: https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html
price_scalar = MinMaxScaler()
price_scalar.fit(tr_X['price'].values.reshape(-1,1)) # finding the min and max of this data
price_normalized = price_scalar.transform(tr_X['price'].values.reshape(-1, 1))
### 3) teacher_number_of_previously_posted_projects
# We scale teacher_number_of_previously_posted_projects to the range 0-1
# MinMaxScaler: https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html
teacher_number_of_previously_posted_projects_scalar = MinMaxScaler()
teacher_number_of_previously_posted_projects_scalar.fit(tr_X['teacher_number_of_previously_posted_projects'].values.reshape(-1,1))
teacher_number_of_previously_posted_projects_normalized = teacher_number_of_previously_posted_projects_scalar.transform(tr_X['teacher_number_of_previously_posted_projects'].values.reshape(-1, 1))
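As a sanity check on what MinMaxScaler does: its transform is simply (x - min) / (max - min), with min and max learned from the data passed to fit(). A minimal sketch with made-up toy values:
# Minimal sketch (toy values, not part of the pipeline): MinMaxScaler maps
# x to (x - min) / (max - min), squeezing the feature into the range 0-1.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
toy = np.array([[2.0], [4.0], [10.0]])
toy_scaler = MinMaxScaler().fit(toy)
manual = (toy - toy.min()) / (toy.max() - toy.min())
print(np.allclose(toy_scaler.transform(toy), manual))  # prints True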
# Transform numerical attributes for test data
ts_price = price_scalar.transform(ts_X['price'].values.reshape(-1,1))
ts_quantity = quantity_scalar.transform(ts_X['quantity'].values.reshape(-1,1))
ts_teacher_number_of_previously_posted_projects = \
teacher_number_of_previously_posted_projects_scalar.transform(ts_X['teacher_number_of_previously_posted_projects'].\
values.reshape(-1,1))
# transform numerical attributes for CV data
cv_price = price_scalar.transform(cv_X['price'].values.reshape(-1,1))
cv_quantity = quantity_scalar.transform(cv_X['quantity'].values.reshape(-1,1))
cv_teacher_number_of_previously_posted_projects = \
teacher_number_of_previously_posted_projects_scalar.transform(cv_X['teacher_number_of_previously_posted_projects'].\
values.reshape(-1,1))
print('--------------Test data--------------')
print('Shape of quantity:', ts_quantity.shape)
print('Shape of price:', ts_price.shape)
print('Shape of teacher_number_of_previously_posted_projects:', ts_teacher_number_of_previously_posted_projects.shape)
print('--------------CV data--------------')
print('Shape of quantity:', cv_quantity.shape)
print('Shape of price:', cv_price.shape)
print('Shape of teacher_number_of_previously_posted_projects:', cv_teacher_number_of_previously_posted_projects.shape)
# We consider only the words which appear in at least 10 documents (rows or projects).
vectorizer_tfidf = TfidfVectorizer(min_df=10)
text_tfidf = vectorizer_tfidf.fit_transform(tr_X['clean_essay'].values)
print("Shape of essay matrix after one hot encodig on train",text_tfidf.shape)
# # Similarly you can vectorize for title also
vectorizer_tfidft = TfidfVectorizer(min_df=10)
title_tfidf = vectorizer_tfidft.fit_transform(tr_X['clean_project_title'])
print("Shape of title matrix after one hot encodig on train ",title_tfidf.shape)
### TFIDF in Essay and Title on CV
print('===========================================================\n')
cv_essay = vectorizer_tfidf.transform(cv_X['clean_essay'])
print("Shape of essay matrix after TF-IDF vectorization on cv", cv_essay.shape)
cv_title = vectorizer_tfidft.transform(cv_X['clean_project_title'])
print("Shape of title matrix after TF-IDF vectorization on cv", cv_title.shape)
### TFIDF in Essay and Title on Test
print('===========================================================\n')
ts_essay = vectorizer_tfidf.transform(ts_X['clean_essay'])
print("Shape of essay matrix after TF-IDF vectorization on test", ts_essay.shape)
ts_title = vectorizer_tfidft.transform(ts_X['clean_project_title'])
print("Shape of title matrix after TF-IDF vectorization on test", ts_title.shape)
from sklearn.preprocessing import MinMaxScaler
text_tfidf = text_tfidf.toarray()
# Scale essay and title TF-IDF features in train data to the range 0-1
text_scalar = MinMaxScaler()
text_scalar.fit(text_tfidf)
# Now scale the data with the min and max found above
text_normalized = text_scalar.transform(text_tfidf)
title_tfidf = title_tfidf.toarray()
title_scalar = MinMaxScaler()
title_scalar.fit(title_tfidf)
# Now scale the data with the min and max found above
title_normalized = title_scalar.transform(title_tfidf)
cv_essay = cv_essay.toarray()
cv_title = cv_title.toarray()
cv_essay_normalized = text_scalar.transform(cv_essay)
cv_title_normalized = title_scalar.transform(cv_title)
# Transform essay and title in test data with the scalers fit on train data
ts_essay = ts_essay.toarray()
ts_title = ts_title.toarray()
ts_essay_normalized = text_scalar.transform(ts_essay)
ts_title_normalized = title_scalar.transform(ts_title)
print('Shape of normalized essay in train data', text_normalized.shape)
print('Shape of normalized title in train data', title_normalized.shape)
print('=======================================================\n')
print('Shape of normalized essay in cv data', cv_essay_normalized.shape)
print('Shape of normalized title in cv data', cv_title_normalized.shape)
print('=======================================================\n')
print('Shape of normalized essay in test data', ts_essay_normalized.shape)
print('Shape of normalized title in test data', ts_title_normalized.shape)
# Merge Them
# for train data
from scipy.sparse import hstack
tr_X = hstack((quantity_normalized, price_normalized, teacher_number_of_previously_posted_projects_normalized, \
school_state_one_hot, categories_one_hot, subcategories_one_hot, project_grade_category_one_hot, \
teacher_prefix_one_hot, text_normalized, title_normalized))
tr_X = tr_X.toarray()
tr_X.shape
# for cv data
cv_X = hstack((cv_quantity, cv_price, cv_teacher_number_of_previously_posted_projects, cv_school_state, \
cv_project_subject_category, cv_project_subject_subcategory,cv_project_grade_category, \
cv_teacher_prefix, cv_essay_normalized, cv_title_normalized))
cv_X = cv_X.toarray()
cv_X.shape
# for test data
ts_X = hstack((ts_quantity, ts_price, ts_teacher_number_of_previously_posted_projects, ts_school_state, \
ts_project_subject_category, ts_project_subject_subcategory,ts_project_grade_category, \
ts_teacher_prefix, ts_essay_normalized, ts_title_normalized))
ts_X = ts_X.toarray()
ts_X.shape
from sklearn.feature_selection import SelectKBest, chi2
# Select the 2000 best features by the chi-squared test between each feature and the class label
kbest = SelectKBest(chi2, k=2000)
kbest.fit(tr_X, tr_y)
tr_X_new = kbest.transform(tr_X)
cv_X_new = kbest.transform(cv_X)
ts_X_new = kbest.transform(ts_X)
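If we want to inspect what survived the selection, SelectKBest exposes get_support() (a boolean mask over the input columns) and scores_ (the chi-squared statistic of each feature against the label); a brief hedged sketch against the kbest object fit above:
# Brief sketch: inspect the chi2 selection. get_support() is a boolean mask
# over the original columns; scores_ holds each feature's chi2 statistic.
import numpy as np
mask = kbest.get_support()
print('Selected', mask.sum(), 'of', mask.shape[0], 'features')
scores = np.nan_to_num(kbest.scores_)  # guard against NaN for constant columns
top10 = np.argsort(scores)[::-1][:10]  # indices of the 10 highest-scoring features
print('Top-10 chi2 scores:', np.round(scores[top10], 2))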
# kNN on SelectKBest TFIDF features
tr_score, cv_score = knnbrutealgo(tr_X_new, tr_y, cv_X_new, cv_y)
Observation: k = 101 gives the maximum CV AUC score for kNN on the SelectKBest TFIDF features.
plotauc_tr_cv('SelectkBest TFIDF', [1,5,11,25,51,101], tr_score, cv_score)
tr_thre, fpr, tpr, tr_predict, ts_predict = plotauc_tr_ts(101, 'SelectKBestTFIDF', tr_X_new, tr_y, ts_X_new, ts_y)
plot_cm('SelectKBest TFIDF', tr_thre, fpr, tpr, tr_y, tr_predict, ts_y, ts_predict)
# Compare all the models using the PrettyTable library
# http://zetcode.com/python/prettytable/
from prettytable import PrettyTable
x = PrettyTable()
x.field_names = ["Features", "Model", "Hyperparameter 'k'", "Maximum AUC score",]
x.add_row(["BoW","Brute", 101, 0.5886356214303946])
x.add_row(["TFIDF","Brute", 51, 0.5736983787094736])
x.add_row(["AvgW2V","Brute", 101, 0.6361381972743773])
x.add_row(["TDIDFW2V","Brute", 101, 0.6562650924482066])
x.add_row(["2000 SelectKbest from TFIDF","Brute",101,0.5730077460317462])
print(x)